1.
JMIR Infodemiology ; 2(2): e38839, 2022.
Article in English | MEDLINE | ID: covidwho-2198093

ABSTRACT

Background: During the ongoing COVID-19 pandemic, we are exposed to large amounts of information each day. This "infodemic" is defined by the World Health Organization as the mass spread of misleading or false information during a pandemic. The spread of misinformation during the infodemic ultimately leads to misunderstandings of public health orders or direct opposition to public policies. Although there have been efforts to combat the spread of misinformation, current manual fact-checking methods are insufficient to combat the infodemic.

Objective: We propose the use of natural language processing (NLP) and machine learning (ML) techniques to build a model that can identify unreliable news articles online.

Methods: First, we preprocessed the ReCOVery data set to obtain 2029 English news articles tagged with COVID-19 keywords from January to May 2020 and labeled as reliable or unreliable. Data exploration was conducted to determine the major differences between reliable and unreliable articles. We then built an ensemble deep learning model that classifies reliability from the body text together with features such as sentiment, Empath-derived lexical categories, and readability.

Results: Reliable news articles have a higher proportion of neutral sentiment, whereas unreliable articles have a higher proportion of negative sentiment. Our analysis also showed that reliable articles are easier to read than unreliable articles and differ in their lexical categories and keywords. The model achieved an area under the curve (AUC) of 0.906, a specificity of 0.835, and a sensitivity of 0.945, exceeding the baseline performance of the original ReCOVery model.

Conclusions: This paper identifies novel differences between reliable and unreliable news articles, and the model was trained using state-of-the-art deep learning techniques. We aim to use these findings to help researchers and the general public more easily identify false information and unreliable media in their everyday lives.
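As a rough illustration of the kind of pipeline this abstract describes, the sketch below combines TF-IDF features from the article body with simple readability and sentiment scores and feeds them to a linear classifier. It is a minimal sketch, not the authors' implementation: the ensemble deep learning model, the Empath lexical categories, and the ReCOVery data are replaced here by a logistic-regression head, toy articles, and the assumed third-party libraries scikit-learn, textstat, and TextBlob.

```python
# Illustrative sketch (not the authors' code): combine body-text TF-IDF features
# with simple readability and sentiment scores, then train a linear classifier
# as a stand-in for the paper's ensemble deep learning model.
# Assumed dependencies: scikit-learn, textstat, textblob.
import numpy as np
import textstat
from textblob import TextBlob
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import FeatureUnion, Pipeline

class HandcraftedFeatures(BaseEstimator, TransformerMixin):
    """Readability and sentiment features for each article body."""
    def fit(self, texts, y=None):
        return self

    def transform(self, texts):
        rows = []
        for text in texts:
            rows.append([
                textstat.flesch_reading_ease(text),  # higher = easier to read
                TextBlob(text).sentiment.polarity,   # -1 (negative) .. +1 (positive)
            ])
        return np.array(rows)

# Toy stand-ins for the labeled ReCOVery articles (1 = reliable, 0 = unreliable).
articles = [
    "Health officials report vaccine trial results in a peer-reviewed study.",
    "Shocking secret cure they do not want you to know about!",
]
labels = [1, 0]

# Feature scaling is omitted for brevity.
model = Pipeline([
    ("features", FeatureUnion([
        ("tfidf", TfidfVectorizer(max_features=5000, stop_words="english")),
        ("handcrafted", HandcraftedFeatures()),
    ])),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(articles, labels)
print(model.predict(["New miracle remedy melts the virus overnight, doctors stunned!"]))
```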

2.
Expert Syst Appl ; 212: 118746, 2023 Feb.
Article in English | MEDLINE | ID: covidwho-2007692

ABSTRACT

During the global fight against the novel coronavirus pneumonia (COVID-19) epidemic, accurate forecasting of outbreak trends has become vital for prevention and control. Effective COVID-19 outbreak trend prediction remains a complex and challenging problem owing to the significant fluctuations in the COVID-19 data series. Most previous studies rely on a single forecasting method for outbreak modeling and ignore the possibility of combining the advantages of different prediction methods, which may lead to unsatisfactory results. This paper therefore develops a novel ensemble paradigm based on multiple neural networks and a new heuristic optimization algorithm. First, a hybrid sine cosine algorithm-whale optimization algorithm (SCWOA) is evaluated on 15 benchmark tests. Second, four neural networks are used as predictors for COVID-19 outbreak forecasting. Each predictor is assigned a weight, and the proposed SCWOA is used to find the best-matching weights for the ensemble model. Daily COVID-19 series collected from three of the most affected countries were taken as test cases. The experimental results demonstrate that different neural network models perform differently across complex epidemic prediction scenarios, and that the SCWOA-based ensemble model outperforms all comparable models in both accuracy and robustness.
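As a rough illustration of the weighted-ensemble idea this abstract describes, the sketch below blends several base forecasts with a weight vector chosen by a simplified sine cosine search. It is a minimal sketch under stated assumptions, not the authors' SCWOA: the whale-optimization component is omitted, the four neural-network predictors are replaced by synthetic forecasts, and only NumPy is assumed.

```python
# Illustrative sketch (not the authors' code): combine several base forecasts with
# a weight vector chosen by a simplified sine-cosine-style metaheuristic, as a
# stand-in for the paper's hybrid SCWOA ensemble. Data and predictors are synthetic.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "daily cases" series and three imperfect base forecasts standing in
# for the neural-network predictors described in the abstract.
true_cases = np.cumsum(rng.poisson(50, size=60)).astype(float)
forecasts = np.stack([
    true_cases * 1.05 + rng.normal(0, 30, 60),   # over-predicting model
    true_cases * 0.95 + rng.normal(0, 30, 60),   # under-predicting model
    true_cases + rng.normal(0, 80, 60),          # noisy model
])

def rmse_of_weights(w):
    """RMSE of the weighted ensemble; weights are normalized to sum to 1."""
    w = np.abs(w)
    w = w / (w.sum() + 1e-12)
    blended = w @ forecasts
    return np.sqrt(np.mean((blended - true_cases) ** 2))

def sine_cosine_search(objective, dim, pop=20, iters=200, lo=0.0, hi=1.0):
    """Simplified sine cosine algorithm (SCA): agents move toward the best
    solution along sine/cosine trajectories with a shrinking step size."""
    X = rng.uniform(lo, hi, size=(pop, dim))
    scores = np.array([objective(x) for x in X])
    best = X[scores.argmin()].copy()
    best_score = scores.min()
    for t in range(iters):
        r1 = 2.0 * (1 - t / iters)                      # step size decays over time
        for i in range(pop):
            r2 = rng.uniform(0, 2 * np.pi, dim)
            r3 = rng.uniform(0, 2, dim)
            r4 = rng.uniform(size=dim)
            step = np.where(r4 < 0.5, np.sin(r2), np.cos(r2))
            X[i] = np.clip(X[i] + r1 * step * np.abs(r3 * best - X[i]), lo, hi)
            score = objective(X[i])
            if score < best_score:
                best, best_score = X[i].copy(), score
    return best, best_score

weights, err = sine_cosine_search(rmse_of_weights, dim=forecasts.shape[0])
weights = np.abs(weights) / np.abs(weights).sum()
print("ensemble weights:", np.round(weights, 3), "RMSE:", round(err, 2))
```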
